Stein Discrepancy






Stochastic Stein Discrepancies

Neural Information Processing Systems

Stein discrepancies (SDs) monitor convergence and non-convergence in approximate inference when exact integration and sampling are intractable. However, the computation of a Stein discrepancy can be prohibitive if the Stein operator, often a sum over likelihood terms or potentials, is expensive to evaluate.
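
As a hedged illustration of the quantity being monitored (not of the paper's stochastic estimator), the sketch below computes a V-statistic estimate of the squared kernelized Stein discrepancy under an RBF kernel, assuming the target's score function (the gradient of its log density) is available; the names ksd_rbf and score are our own. The paper's stochastic variant would replace the full score, a sum over likelihood terms, with a cheap subsampled estimate.

    import numpy as np

    def ksd_rbf(x, score, h=1.0):
        """V-statistic estimate of the squared kernelized Stein discrepancy
        between the sample x (shape (n, d)) and a target whose score
        function is `score`, using the RBF kernel exp(-|x-y|^2 / (2 h^2))."""
        n, d = x.shape
        s = score(x)                            # (n, d) scores at the samples
        diff = x[:, None, :] - x[None, :, :]    # (n, n, d) pairwise differences
        sq = np.sum(diff ** 2, axis=-1)         # (n, n) squared distances
        k = np.exp(-sq / (2 * h ** 2))          # RBF kernel matrix
        # Closed-form Stein kernel u_p(x, y) for the RBF case:
        term1 = s @ s.T                                       # s(x) . s(y)
        term2 = np.einsum('id,ijd->ij', s, diff) / h ** 2     # s(x) . grad_y k / k
        term3 = -np.einsum('jd,ijd->ij', s, diff) / h ** 2    # s(y) . grad_x k / k
        term4 = d / h ** 2 - sq / h ** 4                      # tr(grad_x grad_y k) / k
        return (k * (term1 + term2 + term3 + term4)).mean()

    # Sanity check: samples drawn from the standard normal target itself,
    # whose score is -x, should give an estimate near zero.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((500, 2))
    print(ksd_rbf(x, lambda x: -x))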


Separation Results between Fixed-Kernel and Feature-Learning Probability Metrics

Neural Information Processing Systems

… CIFAR-10 and MNIST datasets when using maximum mean discrepancy (MMD) with learned instead of fixed features. For a related method and in a similar spirit, Santos et al. (2019) show that for image …
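
As a hedged sketch of the fixed-feature baseline being contrasted here, the snippet below estimates squared MMD with an explicit feature map, using random Fourier features to approximate an RBF kernel; all names are ours. A feature-learning variant would instead parameterize phi (for example, by a small network) and optimize it to separate the two samples, which this sketch does not do.

    import numpy as np

    def mmd_sq(x, y, phi):
        """Biased estimate of squared MMD between samples x and y under the
        kernel k(a, b) = phi(a) . phi(b) induced by the feature map phi."""
        return np.sum((phi(x).mean(axis=0) - phi(y).mean(axis=0)) ** 2)

    # Fixed random Fourier features approximating a unit-bandwidth RBF kernel.
    rng = np.random.default_rng(0)
    d, D = 2, 256
    W = rng.standard_normal((d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, D)
    phi = lambda x: np.sqrt(2.0 / D) * np.cos(x @ W + b)

    x = rng.standard_normal((1000, d))          # P = N(0, I)
    y = rng.standard_normal((1000, d)) + 1.0    # Q = N(1, I): means differ
    print(mmd_sq(x, y, phi))                    # clearly positive
    print(mmd_sq(x, rng.standard_normal((1000, d)), phi))  # near zero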




Efficient Learning of Stationary Diffusions with Stein-type Discrepancies

Fabian Bleile, Sarah Lumpp, Mathias Drton

arXiv.org Machine Learning

Learning a stationary diffusion amounts to estimating the parameters of a stochastic differential equation whose stationary distribution matches a target distribution. We build on the recently introduced kernel deviation from stationarity (KDS), which enforces stationarity by evaluating expectations of the diffusion's generator in a reproducing kernel Hilbert space. Leveraging the connection between KDS and Stein discrepancies, we introduce the Stein-type KDS (SKDS) as an alternative formulation. We prove that a vanishing SKDS guarantees alignment of the learned diffusion's stationary distribution with the target. Furthermore, under broad parametrizations, SKDS is convex with an empirical version that is $\varepsilon$-quasiconvex with high probability. Empirically, learning with SKDS attains comparable accuracy to KDS while substantially reducing computational cost, and it yields improvements over the majority of competitive baselines.
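
A hedged sketch of the standard construction the KDS builds on (our notation; the paper's exact objectives may differ in normalization and parametrization): an Itô diffusion $dX_t = b_\theta(X_t)\,dt + \sigma_\theta(X_t)\,dW_t$ has generator

$$\mathcal{A}_\theta f = b_\theta \cdot \nabla f + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma_\theta \sigma_\theta^{\top} \nabla^2 f\big),$$

and a distribution $\pi$ is stationary for the diffusion precisely when $\mathbb{E}_{X \sim \pi}[(\mathcal{A}_\theta f)(X)] = 0$ for all test functions $f$ in a suitable class. Restricting $f$ to the unit ball of an RKHS $\mathcal{H}$ gives a computable deviation from stationarity,

$$\sup_{\|f\|_{\mathcal{H}} \le 1} \big| \mathbb{E}_{X \sim \pi}\big[(\mathcal{A}_\theta f)(X)\big] \big|,$$

which is a Stein discrepancy in which the generator plays the role of the Stein operator; this is the connection the SKDS formulation appears to exploit.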